Security isn’t convenient, and that’s a big problem in the age of AI
Security isn’t convenient. For most of us, traditional cybersecurity protocols like CAPTCHAs, two-factor authentication and passwords feel like barriers that stand in the way of productivity. Whether we’re making a purchase, accessing workplace data or trying to get back into an app that inadvertently logged us out, we want what we want, we want it now, and we often want it even at the expense of our own data security and privacy. In fact, a surprising 29% of consumers have abandoned creating an online account because it required too many security steps.
At the same time, our digital world isn’t getting safer. While the rise of generative AI promises a transformation in productivity at work and at home, these tools are also accelerating the evolution of the global threat landscape at a pace we never could have imagined. Without strong guardrails and stringent regulation, AI will enable increasingly advanced cyber threats with lasting societal implications for our economy, our democracy and our daily lives. Just recently, a deepfake of Joe Biden’s voice told Democrats to stay home during the New Hampshire primary, and a finance employee in Hong Kong wired $25 million to fraudsters who used a deepfake of the company’s CFO.
With threats looming, here are three considerations for consumers and businesses to protect data privacy in the age of AI.
Data is a prized possession
A recent study from Pew Research Center found that 67% of U.S. adults don’t understand what companies are doing with their personal data. This is not acceptable. As individuals, we must be in the driver’s seat and take ownership of our data; it is our most prized possession. That will likely require not only training and awareness, but also education and empowerment. The United States has historically lagged behind Europe on data privacy regulation, but individuals and organizations alike must push for federal standards surrounding trust and transparency in the coming years. With strong data privacy protections and a clear understanding of who owns and uses our data, we can develop trust in AI and work toward productive, powerful and ethical use of these tools.
Trust is the new currency
When it comes to AI, however, trust is difficult to assess. Do we trust the systems we’re using when we turn over our data? Do we trust that AI systems are being used ethically and producing factual outputs? Will companies and governments be held responsible for the negative consequences of AI deployment? These questions must be answered to establish true public trust in AI and to reap the many rewards this technology can deliver. Trust cannot be developed in a silo; it will require collaboration from global leaders in intellectual property, data privacy, cybersecurity and antitrust law and policy, along with technologists in computer science, engineering and machine learning.
AI will exploit every crack in the system
AI is a powerful tool for society, and that includes the hackers who will use it to exploit every weakness and flaw in our global cybersecurity infrastructure. For example, AI models can be trained on malware to generate code that evades detection. As businesses rush to deploy AI to power competitive strategy and productivity for employees and customers, the guardrails around data security and privacy must be strengthened. That means securing and encrypting the data used to train AI, and monitoring and auditing data quality regularly. We protect what we treasure and improve what we measure. Trustworthy AI-powered solutions must be built on a foundation of accurate, complete and consistent data, and every business using or developing AI must be held to this standard to ensure critical data is safeguarded from threats.
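To make that concrete, here is a minimal sketch of those two practices: keeping training data encrypted at rest, and running a basic quality audit before the data feeds a model. The libraries (cryptography, pandas) and the file and column names are illustrative assumptions, not anything the article prescribes.

```python
import io

import pandas as pd
from cryptography.fernet import Fernet

# --- 1. Encrypt training data at rest ----------------------------------
# In production the key would live in a KMS or secrets vault, not in code.
key = Fernet.generate_key()
fernet = Fernet(key)

with open("training_data.csv", "rb") as f:      # hypothetical dataset file
    ciphertext = fernet.encrypt(f.read())

with open("training_data.csv.enc", "wb") as f:  # only ciphertext is stored
    f.write(ciphertext)

# --- 2. Decrypt only when needed, then audit quality before training ---
df = pd.read_csv(io.BytesIO(fernet.decrypt(ciphertext)))

audit = {
    "rows": len(df),
    "duplicate_rows": int(df.duplicated().sum()),
    "null_cells": int(df.isna().sum().sum()),
}
print(audit)  # in a real pipeline, log these counts and alert on regressions
```

Run on a schedule, checks like these turn “monitor and audit regularly” from a slogan into a measurable control: the counts give you a baseline, and any regression is a signal to investigate before the data reaches a model.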
Although we’re still in the early innings of AI adoption, organizations, along with employees, citizens and world leaders, must pay very close attention to the development and proliferation of this technology. In the coming years, AI has the potential to usher in a period of great transformation and convenience across every facet of our lives. Strict data security protocols, strong encryption, and guardrails that establish trust in the technology will be crucial to safe adoption and to protecting our privacy.
Dana Simberkoff is the Chief Risk, Privacy and Information Security Officer for AvePoint, Inc. She is responsible for executive-level consulting, research and analytical support on current and upcoming industry trends, technology, standards, best practices, concepts and solutions for risk management and compliance (e.g., privacy, information security and assurance, data governance, and compliance).